Machine Learning Project

By Gabriel Mendonca Gomes and Alexandro Di Nicola

Introduction

This project analyzes one or more datasets containing EUR/USD exchange-rate data from 2005 to 2020. The goal is to build a Machine Learning model able to predict the EUR/USD rate from the historical data and from the words contained in the reported news.

Importing the libraries

In [1]:
import pandas as pd
import numpy as np
import seaborn as sns
import datetime as dt

import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import plotly.graph_objects as go

from sklearn.preprocessing import scale, StandardScaler, PolynomialFeatures
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge, RidgeCV, Lasso, LassoCV, LinearRegression, LogisticRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import mean_squared_error, classification_report, confusion_matrix, accuracy_score
import statsmodels.formula.api as smf
In [2]:
# Read data into a DataFrame
dataMinute = pd.read_csv('./eurusd_minute.csv')
dataHour = pd.read_csv('./eurusd_hour.csv')
dataNews = pd.read_csv('./eurusd_news.csv')

Provided datasets

Three datasets are provided:

  • eurusd_minute.csv - data sampled every minute
  • eurusd_hour.csv - data sampled every hour
  • eurusd_news.csv - data containing the news

Hourly dataset

To better understand the values in the dataset, let's look at a candlestick chart of the open, close, high and low values

In [3]:
fig = go.Figure(data=[go.Candlestick(x=dataHour['Date'],
                                     open=dataHour['BO'], high=dataHour['BH'],
                                     low=dataHour['BL'], close=dataHour['BC'],
                                     increasing_line_color= 'green', decreasing_line_color= 'red')])
fig.update_yaxes(fixedrange=False)
fig.show()
In [4]:
dataHour['Date'] = dataHour['Date'].astype(str)
dataHour['Time'] = dataHour['Time'].astype(str)

dataHour['Date'] = dataHour['Date'] + ' ' + dataHour['Time']
dataHour = dataHour.drop(columns=["Time"])
dataHour['Date'] = pd.to_datetime(dataHour['Date'], format='%Y-%m-%d %H:%M')

dataMinute['Date'] = dataMinute['Date'].astype(str)
dataMinute['Time'] = dataMinute['Time'].astype(str)

dataMinute['Date'] = dataMinute['Date'] + ' ' + dataMinute['Time']
dataMinute = dataMinute.drop(columns=["Time"])
dataMinute['Date'] = pd.to_datetime(dataMinute['Date'], format='%Y-%m-%d %H:%M')
print(dataHour.head(15))
                  Date       BO       BH       BL       BC      BCh       AO  \
0  2005-05-02 00:00:00  1.28520  1.28520  1.28400  1.28440 -0.00080  1.28540   
1  2005-05-02 01:00:00  1.28440  1.28480  1.28390  1.28420 -0.00020  1.28460   
2  2005-05-02 02:00:00  1.28430  1.28540  1.28410  1.28510  0.00080  1.28450   
3  2005-05-02 03:00:00  1.28510  1.28590  1.28500  1.28510  0.00000  1.28530   
4  2005-05-02 04:00:00  1.28520  1.28590  1.28490  1.28550  0.00030  1.28540   
5  2005-05-02 05:00:00  1.28540  1.28580  1.28530  1.28540  0.00000  1.28560   
6  2005-05-02 06:00:00  1.28540  1.28600  1.28520  1.28585  0.00045  1.28560   
7  2005-05-02 07:00:00  1.28585  1.28605  1.28515  1.28555 -0.00030  1.28600   
8  2005-05-02 08:00:00  1.28555  1.28675  1.28555  1.28640  0.00085  1.28570   
9  2005-05-02 09:00:00  1.28640  1.28680  1.28620  1.28680  0.00040  1.28655   
10 2005-05-02 10:00:00  1.28670  1.28740  1.28650  1.28715  0.00045  1.28685   
11 2005-05-02 11:00:00  1.28725  1.28735  1.28580  1.28600 -0.00125  1.28740   
12 2005-05-02 12:00:00  1.28600  1.28720  1.28580  1.28590 -0.00010  1.28615   
13 2005-05-02 13:00:00  1.28580  1.28656  1.28570  1.28620  0.00040  1.28595   
14 2005-05-02 14:00:00  1.28620  1.28680  1.28367  1.28367 -0.00253  1.28700   

         AH       AL       AC      ACh  
0   1.28540  1.28420  1.28460 -0.00080  
1   1.28500  1.28410  1.28440 -0.00020  
2   1.28560  1.28430  1.28530  0.00080  
3   1.28610  1.28520  1.28530  0.00000  
4   1.28610  1.28510  1.28570  0.00030  
5   1.28600  1.28550  1.28560  0.00000  
6   1.28620  1.28540  1.28600  0.00040  
7   1.28620  1.28530  1.28570 -0.00030  
8   1.28690  1.28570  1.28655  0.00085  
9   1.28695  1.28635  1.28695  0.00040  
10  1.28755  1.28665  1.28730  0.00045  
11  1.28750  1.28595  1.28615 -0.00125  
12  1.28735  1.28595  1.28605 -0.00010  
13  1.28700  1.28585  1.28700  0.00105  
14  1.28700  1.28387  1.28387 -0.00313  

Let's add columns holding the future values, so that we can train models to predict them

In [5]:
dataHour['NextBC'] = dataHour['BC'].shift(-1) # adds to the current row the value of the next row
dataHour['Next4BC'] = dataHour['BC'].shift(-4) # adds to the current row the value 4 rows ahead
dataHour['Next12BC'] = dataHour['BC'].shift(-12) # adds to the current row the value 12 rows ahead
dataHour['Next24BC'] = dataHour['BC'].shift(-24) # adds to the current row the value 24 rows ahead
dataHour = dataHour[:-25] # drop the tail rows whose shifted targets are NaN
In [6]:
# Split the data into training/testing sets
# ds: dataset
# regressor: the regressor to train
# split_random: if True, random split; otherwise sequential (temporal) split
# valid_portion: fraction of the data to use as the validation set
# yColumn: index of the target column (11 = 1 hour ahead, 12 = 4 hours ahead, 13 = 12 hours ahead, 14 = 24 hours ahead)
def trainAndShowChart(ds, regressor, split_random, valid_portion, yColumn, title):
    data_x = ds.values[:,[1,2,3,4,5,6,7,8,9,10]]
    data_y = ds.values[:,yColumn]

    data_x = data_x.astype(np.float32)
    data_y = data_y.astype(np.float32)
    if split_random:
        # Random split into train and validation sets
        train_x, validation_x, train_y, validation_y = train_test_split(data_x, data_y, test_size=valid_portion, random_state=1)
    else:
        train_size = round((1-valid_portion) * len(data_x))
        train_x = data_x[0:train_size,:]
        validation_x = data_x[train_size:-1,:]
        train_y = data_y[0:train_size]
        validation_y = data_y[train_size:-1]
    regressor.fit(train_x, train_y)

    # Get the predictions on the training set
    train_y_predicted = regressor.predict(train_x)

    # Compute the training RMSE
    rmse = np.sqrt(mean_squared_error(train_y, train_y_predicted))

    # Get the predictions on the validation set and compute its RMSE
    validation_y_predicted = regressor.predict(validation_x)
    vrmse = np.sqrt(mean_squared_error(validation_y, validation_y_predicted))


    # Show the results table
    data = {'Metric': ['Train RMSE', 'Validation RMSE', 'R2 score'],
            'Value': [rmse, vrmse, regressor.score(validation_x, validation_y)]}

    tabella = pd.DataFrame(data)

    fig, ax = plt.subplots(figsize=(5, 1))
    ax.axis('off')
    ax.axis('tight')
    table = ax.table(cellText=tabella.values, colLabels=tabella.columns, loc='center')
    table.set_fontsize(20)
    table.auto_set_column_width(False)

    # adjust the row heights
    cellDict = table.get_celld()
    cellDict[(0, 0)].set_height(.5)
    cellDict[(0, 1)].set_height(.5)

    for i in range(len(data['Metric'])):
        cellDict[(i+1, 0)].set_height(.5)
        cellDict[(i+1, 1)].set_height(.5)

    plt.show()

    # Compute the error as the deviation of the predictions from the real values
    errors = np.abs(validation_y - validation_y_predicted)
    # Plot the real vs. predicted series and the distribution of the errors
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(25, 4))
    fig.suptitle(title)

    ax1.set_title("Error distribution")
    ax1.hist(x = errors)

    # Plot the real and the predicted series
    ax2.set_title("Real values vs. predictions")

    ax2.plot(ds['Date'][round((1-valid_portion) * len(data_x)):-1], validation_y[0:round((valid_portion) * len(data_x))], label='Real')
    ax2.plot(ds['Date'][round((1-valid_portion) * len(data_x)):-1], validation_y_predicted[0:round((valid_portion) * len(data_x))], label='Prediction')
    ax2.legend(bbox_to_anchor=(1.02, 1), loc='upper left', borderaxespad=0.)

Using the function above, let's check whether a linear regression model or a random forest is the better choice

In [7]:
trainAndShowChart(dataHour, LinearRegression(), False, 0.8, 11, "1 hour ahead - Linear Regression")
trainAndShowChart(dataHour, RandomForestRegressor(), False, 0.8, 11, "1 hour ahead - Random Forest")

Note that the random forest does not predict these values well: it cannot predict values it was never trained on. Here it was never trained on values below 1.2 and, as the chart shows, in that region it predicts values far higher than the real ones. For this reason we choose linear regression.
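This limitation is easy to reproduce. Below is a minimal sketch on synthetic data (not part of the original analysis): a tree ensemble clips its predictions to the target range seen in training, while a linear model extrapolates.

# Sketch (synthetic data): random forests cannot extrapolate beyond the
# target range seen during training, linear models can.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

X_train = np.arange(100).reshape(-1, 1)
y_train = 1.2 + 0.001 * X_train.ravel()   # targets between 1.2 and 1.3
X_new = np.array([[200.0]])               # far outside the training range

rf = RandomForestRegressor(random_state=0).fit(X_train, y_train)
lin = LinearRegression().fit(X_train, y_train)

print(rf.predict(X_new))   # stuck near the training maximum (~1.3)
print(lin.predict(X_new))  # extrapolates to ~1.4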

Predicting the data

Now let's see how far ahead we can predict with a linear regression model

In [8]:
linear_regH1 = LinearRegression()
linear_regH4 = LinearRegression()
linear_regH12 = LinearRegression()
linear_regH24 = LinearRegression()
trainAndShowChart(dataHour, linear_regH1, False, 0.8, 11, "1 hour ahead")
trainAndShowChart(dataHour, linear_regH4, False, 0.8, 12, "4 hours ahead")
trainAndShowChart(dataHour, linear_regH12, False, 0.8, 13, "12 hours ahead")
trainAndShowChart(dataHour, linear_regH24, False, 0.8, 14, "24 hours ahead")

Analysis of the results

Even our model's 24-hours-ahead prediction is very accurate. Let's therefore widen the horizon and see whether it can predict the price 1 week and 1 month ahead

In [9]:
# NextWeekBC column containing the BC value one week later
dataHour['NextWeekBC'] = dataHour['BC'].shift(-24*7)
# NextMonthBC column containing the BC value one month later
dataHour['NextMonthBC'] = dataHour['BC'].shift(-24*30)
dataHour = dataHour[:-24*30]

linear_reg = LinearRegression()
trainAndShowChart(dataHour, linear_reg, False, 0.8, 15, "1 week ahead, 20% training")
trainAndShowChart(dataHour, linear_reg, False, 0.8, 16, "1 month ahead, 20% training")

Analysis of the results

At this point the predicted values differ from the real ones, yet the R2 score is still very high, so the model seems very accurate. We suspect this is due to the "stable" phases, in which the rate stays almost constant for long periods and therefore inflates R2. To verify this, let's increase the share of data used for training from 20% to 90% and see how R2 changes.

In [10]:
trainAndShowChart(dataHour, linear_reg, False, 0.1, 15, "1 week ahead, 90% training")
trainAndShowChart(dataHour, linear_reg, False, 0.1, 16, "1 month ahead, 90% training")

It does appear that the R2 scores are high because of the stable phases, and that the model does not actually predict the long-term value that well. It predicts well only when the price change is small, which inflates R2 without making it realistic. We therefore decide to use only the 1-, 4-, 12- and 24-hours-ahead predictions, since they are the most realistic.
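The inflation of R2 by slow-moving series can be reproduced on synthetic data. A minimal sketch (not from the original notebook): on a random walk, even the naive "tomorrow equals today" forecast scores an R2 close to 1, because the score mostly reflects the series' slow drift rather than real predictive skill.

# Sketch: on a random walk the naive persistence forecast (repeat the
# previous value) already yields a very high R2 score.
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
walk = 1.28 + np.cumsum(rng.normal(0, 0.0005, size=100_000))  # EUR/USD-like scale

y_true = walk[1:]    # actual next value
y_pred = walk[:-1]   # naive forecast: repeat the current value

print(r2_score(y_true, y_pred))  # close to 1.0 despite zero real skill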

Feature analysis

For the 1-, 4-, 12- and 24-hours-ahead cases, let's see which features matter most for the prediction

In [11]:
# plot feature importance
plt.bar([x for x in range(len(linear_regH1.coef_))], linear_regH1.coef_)
plt.show()
plt.bar([x for x in range(len(linear_regH4.coef_))], linear_regH4.coef_)
plt.show()
plt.bar([x for x in range(len(linear_regH12.coef_))], linear_regH12.coef_)
plt.show()
plt.bar([x for x in range(len(linear_regH24.coef_))], linear_regH24.coef_)
plt.show()

Note that the most important features are always the same; what changes is the scale of their importance
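To make the bar charts above easier to read, the coefficients can be paired with the names of the columns they refer to (a small sketch; the x matrix is ds.values[:, 1:11], so the order below follows the dataset's columns):

# Label each coefficient with its column name (columns 1..10 of dataHour)
feature_names = ['BO', 'BH', 'BL', 'BC', 'BCh', 'AO', 'AH', 'AL', 'AC', 'ACh']
for name, coef in zip(feature_names, linear_regH1.coef_):
    print(f'{name}: {coef:+.4f}')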

Classifying the hourly dataset

Let's now try a classification to see whether the price will rise or fall

In [12]:
# create the increment column: 1 if the price rose with respect to the previous value, 0 otherwise
dataHour['increment'] = dataHour['BO'].diff()
dataHour['increment'] = dataHour['increment'].shift(-1)
dataHour['increment'] = dataHour['increment'].apply(lambda x: 1 if x > 0 else 0)
# increment001 is 1 if the price rose by more than 0.001 with respect to the previous value
dataHour['increment001'] = dataHour['BO'].diff()
dataHour['increment001'] = dataHour['increment001'].shift(-1)
dataHour['increment001'] = dataHour['increment001'].apply(lambda x: 1 if x > 0.001 else 0)
# increment01 is 1 if the price rose by more than 0.01 with respect to the previous value
dataHour['increment01'] = dataHour['BO'].diff()
dataHour['increment01'] = dataHour['increment01'].shift(-1)
dataHour['increment01'] = dataHour['increment01'].apply(lambda x: 1 if x > 0.01 else 0)

Splitting the data into training and test sets

In [13]:
def calculateBestNeighborAndShowGraph(ds, columnToClassify, title):
    data_x = ds.iloc[:, [1,2,3,4,5,6,7,8,9,10]].values
    data_y = ds.iloc[:, columnToClassify].values

    data_x = data_x[:-1,:] # drop the last row
    data_y = data_y[:-1] # drop the last row
    # split the data into training and test sets
    X_train, X_test, y_train, y_test = train_test_split(data_x, data_y, test_size=0.80)
    # fit the scaler on the training data only
    scaler = StandardScaler()
    scaler.fit(X_train)
    # apply the standardization to both sets
    X_train = scaler.transform(X_train)
    X_test = scaler.transform(X_test)

    error = []
    # Calculating error for K values between 1 and 20
    for i in range(1, 20):
        knn = KNeighborsClassifier(n_neighbors=i)
        knn.fit(X_train, y_train)
        pred_i = knn.predict(X_test)
        error.append(np.mean(pred_i != y_test))
    plt.figure(figsize=(14, 6))
    plt.plot(range(1, 20), error, color='red', linestyle='dashed', marker='o',
             markerfacecolor='blue', markersize=10, zorder=1)

    min_x = np.argmin(error) + 1
    min_y = np.min(error)
    plt.scatter(min_x, min_y, color='green', label='minimum', zorder=2, s=100)
    plt.legend()
    plt.title('Error Rate K Value '+ title)
    plt.xlabel('K Value')
    plt.ylabel('Mean Error')
    return X_train, X_test, y_train, y_test

X_train, X_test, y_train, y_test = calculateBestNeighborAndShowGraph(dataHour, 17, "increment/decrement")
In [14]:
# create the classifier
classifier = KNeighborsClassifier(n_neighbors=13)
classifier.fit(X_train, y_train)
# predict on the test data
y_pred = classifier.predict(X_test)

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
[[36211   797]
 [  953 35910]]
              precision    recall  f1-score   support

           0       0.97      0.98      0.98     37008
           1       0.98      0.97      0.98     36863

    accuracy                           0.98     73871
   macro avg       0.98      0.98      0.98     73871
weighted avg       0.98      0.98      0.98     73871

Analysis of the classification

This result shows that the classifier predicts the category well: we obtain a precision of about 98%. Let's now see how it predicts a price change of at least 0.001

In [15]:
X_train, X_test, y_train, y_test = calculateBestNeighborAndShowGraph(dataHour, 18, "increment of at least 0.001")
In [16]:
# create the classifier
classifier = KNeighborsClassifier(n_neighbors=15)
classifier.fit(X_train, y_train)
# predict on the test data
y_pred = classifier.predict(X_test)

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
[[61668   308]
 [  437 11458]]
              precision    recall  f1-score   support

           0       0.99      1.00      0.99     61976
           1       0.97      0.96      0.97     11895

    accuracy                           0.99     73871
   macro avg       0.98      0.98      0.98     73871
weighted avg       0.99      0.99      0.99     73871

This result shows that the classifier again predicts the category well: the positive class is predicted with 97% precision across its more than 11,000 cases.

In [17]:
X_train, X_test, y_train, y_test = calculateBestNeighborAndShowGraph(dataHour, 19, "increment of at least 0.01")
In [18]:
# create the classifier
classifier = KNeighborsClassifier(n_neighbors=5)
classifier.fit(X_train, y_train)
# predict on the test data
y_pred = classifier.predict(X_test)

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
[[73822     2]
 [   12    35]]
              precision    recall  f1-score   support

           0       1.00      1.00      1.00     73824
           1       0.95      0.74      0.83        47

    accuracy                           1.00     73871
   macro avg       0.97      0.87      0.92     73871
weighted avg       1.00      1.00      1.00     73871

From this KNN classification (k = 5) we note that precision is again very high, but the event in question, i.e. the price rising by 0.01 within an hour, occurred only 47 times out of more than 70,000, so the result is not very significant.
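With such a rare positive class, accuracy alone is dominated by the majority class. A small sketch of the trivial baseline (reusing the y_test from the split above) makes this explicit:

# Always predicting class 0 on this test set already yields ~99.94%
# accuracy (73,824 of 73,871 samples are negative), so the high accuracy
# above mostly reflects class imbalance.
import numpy as np
from sklearn.metrics import accuracy_score

baseline_pred = np.zeros_like(y_test)
print(accuracy_score(y_test, baseline_pred))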

Linear regression on the minute dataset

This operation can take a long time, since the dataset is very large

In [19]:
dataMinute['NextBC'] = dataMinute['BC'].shift(-1) # adds to the current row the value of the next row
dataMinute['Next15BC'] = dataMinute['BC'].shift(-15) # adds to the current row the value 15 rows ahead
dataMinute['Next30BC'] = dataMinute['BC'].shift(-30) # adds to the current row the value 30 rows ahead
dataMinute = dataMinute[:-31]

linear_reg15M = LinearRegression()
linear_reg30M = LinearRegression()
trainAndShowChart(dataMinute, linear_reg15M, False, 0.8, 12, "15 minutes ahead")
trainAndShowChart(dataMinute, linear_reg30M, False, 0.8, 13, "30 minutes ahead")
In [20]:
# create the increment column: 1 if the price rose with respect to the previous value, 0 otherwise
dataMinute['increment'] = dataMinute['BO'].diff()
dataMinute['increment'] = dataMinute['increment'].shift(-1)
dataMinute['increment'] = dataMinute['increment'].apply(lambda x: 1 if x > 0 else 0)
In [21]:
X_train, X_test, y_train, y_test = calculateBestNeighborAndShowGraph(dataMinute, 14, "increment/decrement")
In [22]:
# create the classifier
classifier = KNeighborsClassifier(n_neighbors=9)
classifier.fit(X_train, y_train)
# predict on the test data
y_pred = classifier.predict(X_test)

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
[[2241675  204445]
 [ 249834 1799076]]
              precision    recall  f1-score   support

           0       0.90      0.92      0.91   2446120
           1       0.90      0.88      0.89   2048910

    accuracy                           0.90   4495030
   macro avg       0.90      0.90      0.90   4495030
weighted avg       0.90      0.90      0.90   4495030

News analysis

In this section we analyze the available news and classify them according to the day's movement of the exchange rate

In [23]:
dataMinute = pd.read_csv('./eurusd_minute.csv')
dataHour = pd.read_csv('./eurusd_hour.csv')
dataNews = pd.read_csv('./eurusd_news.csv')
dataMinute.head()
Out[23]:
Date Time BO BH BL BC BCh AO AH AL AC ACh
0 2005-01-02 18:29 1.3555 1.3555 1.3555 1.3555 0.0 1.3565 1.3565 1.3565 1.3565 0.0
1 2005-01-02 18:38 1.3555 1.3555 1.3555 1.3555 0.0 1.3565 1.3565 1.3565 1.3565 0.0
2 2005-01-02 18:51 1.3562 1.3562 1.3562 1.3562 0.0 1.3572 1.3572 1.3572 1.3572 0.0
3 2005-01-02 18:52 1.3560 1.3560 1.3560 1.3560 0.0 1.3570 1.3570 1.3570 1.3570 0.0
4 2005-01-02 18:55 1.3563 1.3563 1.3563 1.3563 0.0 1.3573 1.3573 1.3573 1.3573 0.0

Analysis of the dataset

The main and most interesting columns are "Date", "Time", "BO", "BC", "AO" and "AC". "Date" and "Time" give the date and time of each quote; "BO" and "BC" are the bid (offer) open and close prices; "AO" and "AC" are the ask (demand) open and close prices
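For instance, the ask/bid difference gives the spread quoted at each timestamp (a quick sketch using the columns just described):

# The ask open minus the bid open is the quoted spread per timestamp
spread = dataMinute['AO'] - dataMinute['BO']
print(spread.describe())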

In [24]:
# Create a Year column containing the year
dataMinute['Date'] = pd.to_datetime(dataMinute['Date'])
dataMinute['Year'] = dataMinute['Date'].dt.year

dataMinute['Date'] = dataMinute['Date'].astype(str)
dataMinute['Time'] = dataMinute['Time'].astype(str)
dataMinute['Date'] = dataMinute['Date'] + ' ' + dataMinute['Time']
dataMinute['Date'] = pd.to_datetime(dataMinute['Date'], format='%Y-%m-%d %H:%M')

dataMinute.head()
Out[24]:
Date Time BO BH BL BC BCh AO AH AL AC ACh Year
0 2005-01-02 18:29:00 18:29 1.3555 1.3555 1.3555 1.3555 0.0 1.3565 1.3565 1.3565 1.3565 0.0 2005
1 2005-01-02 18:38:00 18:38 1.3555 1.3555 1.3555 1.3555 0.0 1.3565 1.3565 1.3565 1.3565 0.0 2005
2 2005-01-02 18:51:00 18:51 1.3562 1.3562 1.3562 1.3562 0.0 1.3572 1.3572 1.3572 1.3572 0.0 2005
3 2005-01-02 18:52:00 18:52 1.3560 1.3560 1.3560 1.3560 0.0 1.3570 1.3570 1.3570 1.3570 0.0 2005
4 2005-01-02 18:55:00 18:55 1.3563 1.3563 1.3563 1.3563 0.0 1.3573 1.3573 1.3573 1.3573 0.0 2005

In this modified dataset a "Year" column has been created with the year extracted from the date, while the "Time" column has been merged into the "Date" column

In [25]:
# Keep only the year 2010
dataMinute2010 = dataMinute[dataMinute['Year'] == 2010]
dataToUse=pd.DataFrame()
dataToUse['Date'], dataToUse['BC'] = dataMinute2010['Date'], dataMinute2010['BC']

dataToUse.head()
Out[25]:
Date BC
1864525 2010-01-03 17:53:00 1.43045
1864526 2010-01-03 17:56:00 1.43097
1864527 2010-01-03 18:02:00 1.43225
1864528 2010-01-03 18:03:00 1.43178
1864529 2010-01-03 18:04:00 1.43228
In [26]:
plt.scatter(dataToUse['Date'],dataToUse['BC'], color='black')
plt.show()

Chart analysis

The chart shows the EUR/USD rate over 2010. The rate is quite volatile, with very high peaks and deep troughs

Building the dataset for classification

For the classification we build a dataset with all the news ordered by date and, for each news date, the difference between that day's opening bid (BO) and closing bid (BC)

In [27]:
date = dataNews['Date'].unique()
tmp = dataHour[dataHour['Date'].isin(date)]               # keep only the hours on news dates
BOtmp = tmp.groupby('Date')['BO'].apply(list).to_dict()   # bid opens per day
BCtmp = tmp.groupby('Date')['BC'].apply(list).to_dict()   # bid closes per day

ttmp = {k: v[0] for k, v in BOtmp.items()}                # first bid open of each day

for k,v in ttmp.items():
    ttmp[k] = v - BCtmp[k][-1]                            # day's open minus its last close

dfDayly = pd.DataFrame.from_dict(ttmp, orient='index')
dfDayly = dfDayly.reset_index()
dfDayly.columns = ['Date', 'BO-BC']
print(dfDayly.head(),"\n",dfDayly.shape)
         Date    BO-BC
0  2018-01-02 -0.00512
1  2018-01-03  0.00510
2  2018-01-04 -0.00559
3  2018-01-05  0.00390
4  2018-01-07 -0.00115 
 (312, 2)

Analysis of the new dataset

Note that, after grouping together all the news published on the same day, only 312 rows remain for the text analysis. This yields a training set with few samples, and therefore a low expected accuracy

In [28]:
dataNews = dataNews[['Date', 'Title', 'Article']]
dataNews['News'] = dataNews['Title'] + ' ' + dataNews['Article']
dataNews = dataNews[['Date', 'News']]
dataNews['News'] = dataNews['News'].str.replace('\n', ' ')
dataNews['News'] = dataNews['News'].str.replace('\r', ' ')
dataNews['News'] = dataNews['News'].str.replace('\t', ' ')
dataNews['News'] = dataNews['News'].str.replace('.', ' ')
dataNews['News'] = dataNews['News'].str.replace(',', ' ')
dataNews['News'] = dataNews['News'].str.replace('\'s', ' ')
dataNews['News'] = dataNews['News'].str.replace(')', ' ')
dataNews['News'] = dataNews['News'].str.replace('(', ' ')

dataNews['News'] = dataNews['News'].apply(lambda x: ' '.join([w for w in x.split() if len(w)>3]))

In [29]:
# concatenate all the news published on the same day
dataNewsToUse = dataNews.groupby('Date')['News'].apply(list).to_dict()

for k,v in dataNewsToUse.items():
    dataNewsToUse[k] = ' '.join(v)

dataNewsToUse = pd.DataFrame.from_dict(dataNewsToUse, orient='index')
dataNewsToUse = dataNewsToUse.reset_index()
dataNewsToUse.columns = ['Date', 'News']
dataNewsToUse
Out[29]:
Date News
0 2018-01-01 Forex Aussie Gains Asia After Caixin Manufactu...
1 2018-01-02 Forex Dollar Weakness Continues Into 2018 Fall...
2 2018-01-03 Dollar Snap 10-Day Losing Streak After Strong ...
3 2018-01-04 Forex Upbeat Economic Data Fails Rescue Dollar...
4 2018-01-05 Forex- Dollar Rises Despite Fall Jobs Service ...
... ... ...
314 2019-01-14 Forex Dollar Flat Rebounds Reuters Investing g...
315 2019-01-15 Forex Dollar Rises After Weak German Data Reut...
316 2019-01-16 Forex Sterling Rebounds "Diminished" Brexit Ri...
317 2019-01-17 Forex Dollar Remains Steady Jobless Claims 5-W...
318 2019-01-18 Forex Dollar Rises Consumer Optimism Falls Reu...

319 rows × 2 columns

In [30]:
dfDayly = dfDayly.merge(dataNewsToUse, on='Date')
dfDayly['BO-BC'] = np.where(dfDayly['BO-BC'] > 0, 1, 0)  # binarize the daily difference
dfDayly
Out[30]:
Date BO-BC News
0 2018-01-02 0 Forex Dollar Weakness Continues Into 2018 Fall...
1 2018-01-03 1 Dollar Snap 10-Day Losing Streak After Strong ...
2 2018-01-04 0 Forex Upbeat Economic Data Fails Rescue Dollar...
3 2018-01-05 1 Forex- Dollar Rises Despite Fall Jobs Service ...
4 2018-01-07 0 Forex Dollar Edges Against Asia Light Data Reu...
... ... ... ...
307 2019-01-14 0 Forex Dollar Flat Rebounds Reuters Investing g...
308 2019-01-15 1 Forex Dollar Rises After Weak German Data Reut...
309 2019-01-16 1 Forex Sterling Rebounds "Diminished" Brexit Ri...
310 2019-01-17 1 Forex Dollar Remains Steady Jobless Claims 5-W...
311 2019-01-18 1 Forex Dollar Rises Consumer Optimism Falls Reu...

312 rows × 3 columns

Analysis of the new dataset used to categorize the sentences

The dataset has been split into two classes: 1 for news whose daily movement is positive and 0 for those whose movement is negative. This choice was made to simplify the categorization. In addition, all words shorter than 4 letters and all special characters have been removed, in this case by hand.

Classifying the sentences

Several methods were written for classifying the sentences. They are deliberately generic, so they can be used with any dataset, and each returns its accuracy.

Naive Bayes classification

As a first sentence classifier we chose Naive Bayes, because it is a very simple classifier and fast to train. Moreover, being probabilistic, it can return the probability that a sentence belongs to one class rather than another.

In [31]:
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

def NaiveBayes():
    X_train, X_test, y_train, y_test = train_test_split(dfDayly['News'], dfDayly['BO-BC'], test_size=0.2, random_state=52)

    # build the feature matrix with CountVectorizer
    vectorizer = CountVectorizer()
    X_train_vect = vectorizer.fit_transform(X_train)
    X_test_vect = vectorizer.transform(X_test)

    # train the classification model
    clf = MultinomialNB()
    clf.fit(X_train_vect, y_train)

    # make predictions on the test set
    y_pred = clf.predict(X_test_vect)

    # compute the model accuracy
    accuracy = accuracy_score(y_test, y_pred)
    #print('Model accuracy: {:.2f}%'.format(accuracy*100))
    return accuracy
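Since MultinomialNB is probabilistic, the per-class probabilities mentioned above can also be inspected. A small usage sketch, assuming the clf and X_test_vect names from inside NaiveBayes():

# Sketch (inside NaiveBayes(), before returning): per-class probabilities
proba = clf.predict_proba(X_test_vect)  # shape: (n_samples, 2)
print(proba[:3])                        # P(class 0) and P(class 1) per row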

Logistic classification

The second sentence classification uses logistic regression, a widely used classifier that typically achieves high accuracy.

In [32]:
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def Logistic():

    # Split the dataset into training and test sets
    X_train, X_test, y_train, y_test = train_test_split(dfDayly['News'], dfDayly['BO-BC'], test_size=0.2, random_state=42)

    # Build the feature matrix with CountVectorizer
    vectorizer = CountVectorizer()
    X_train_vect = vectorizer.fit_transform(X_train)
    X_test_vect = vectorizer.transform(X_test)

    # Create a logistic regression model and train it
    lr = LogisticRegression()
    lr.fit(X_train_vect, y_train)

    # Predict on the test set with the trained model
    y_pred = lr.predict(X_test_vect)

    # Evaluate the model accuracy with the accuracy_score metric
    accuracy = accuracy_score(y_test, y_pred)
    #print('Model accuracy: {:.2f}%'.format(accuracy*100))
    return accuracy
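Note that, in the final comparison below, this model emits an lbfgs ConvergenceWarning. A hedged sketch of the usual fix (not applied here): allow more iterations, or scale the features before fitting.

# Sketch: raise the iteration cap so lbfgs can converge
lr = LogisticRegression(max_iter=1000)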

Random Forest classification

The third sentence classification uses the Random Forest classifier, a widely used classifier that achieves high accuracy thanks to the randomization of its choices.

In [33]:
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def RandomForest():
    print("Starto Random Forest")
    # Dividi il dataset in training set e test set
    X_train, X_test, y_train, y_test = train_test_split(dfDayly['News'], dfDayly['BO-BC'], test_size=0.2)

    # Build the feature matrix with CountVectorizer
    vectorizer = CountVectorizer()
    X_train_vect = vectorizer.fit_transform(X_train)
    X_test_vect = vectorizer.transform(X_test)

    # Create a Random Forest model and train it
    rf = RandomForestClassifier()
    rf.fit(X_train_vect, y_train)

    # Predict on the test set with the trained model
    y_pred = rf.predict(X_test_vect)

    # Evaluate the model accuracy with the accuracy_score metric
    accuracy = accuracy_score(y_test, y_pred)
    #print('Model accuracy: {:.2f}%'.format(accuracy*100))
    print(classification_report(y_test, y_pred))
    print("------------------------------------------")
    return accuracy

Random Forest classification with TF-IDF

The fourth sentence classification uses the Random Forest classifier on a TF-IDF matrix. TF-IDF computes the frequency of each word and weights it accordingly, which tends to improve accuracy. Several parameter combinations are tried and, through an exhaustive grid search, the best result is found and returned.
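To see concretely what the TF-IDF weighting does to raw word counts, here is a tiny standalone sketch on toy documents (not the news data):

# Toy sketch: TF-IDF down-weights words that appear in every document and
# up-weights words that are distinctive for a single document.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ['dollar rises today', 'dollar falls today', 'euro rallies']
vec = TfidfVectorizer()
weights = vec.fit_transform(docs)
print(vec.get_feature_names_out())
print(weights.toarray().round(2))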

In [34]:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

def TFIDF():
    print("TFIDF")
    X_train, X_test, y_train, y_test = train_test_split(dfDayly['News'], dfDayly['BO-BC'], test_size=0.2, random_state=42)

    # define the pipeline that builds the TF-IDF matrix and the model
    pipeline = Pipeline([
        ('tfidf', TfidfVectorizer()),
        ('clf', RandomForestClassifier())
    ])

    # define the parameter grid to test
    param_grid = {
        'tfidf__max_df': [0.25, 0.5, 0.75],
        'clf__n_estimators': [19 ,20, 21],
        'clf__max_depth': [6, 8, 10],
        'clf__min_samples_split': [0.25,0.5, 0.75],
        'clf__min_samples_leaf': [4, 5, 7]
    }

    # create a GridSearchCV object
    grid_search = GridSearchCV(estimator=pipeline, param_grid=param_grid, cv=5)

    # fit the model over the parameter grid
    grid_search.fit(X_train, y_train)

    # print the best parameters and scores
    print("Best parameters: ", grid_search.best_params_)
    print("Best score: ", grid_search.best_score_)
    print("Best estimator: ", grid_search.best_estimator_)
    print("Best test score: ", grid_search.best_estimator_.score(X_test, y_test))

    y_pred = grid_search.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    #print('Model accuracy: {:.2f}%'.format(accuracy*100))
    return accuracy

KNN classification

The fifth sentence classification uses the KNN classifier, in this case with the cosine metric. A for loop varies the number of neighbors from 1 to 19, and the run with the highest accuracy is returned
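With metric='cosine', two documents are compared by the angle between their count vectors, so documents with proportional word counts are considered identical regardless of length. A quick sketch of the distance being used:

# Sketch: cosine distance between two count vectors with the same proportions
import numpy as np

a = np.array([2, 0, 1], dtype=float)
b = np.array([4, 0, 2], dtype=float)  # same proportions as a, twice as long
cos_dist = 1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos_dist)  # ~0.0: the vectors point in the same direction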

In [35]:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report


def KNN():
    print("KNN")
    best_neighbor = 1
    accuracy_best_neighbor = 0
    report = ""

    for i in range(1,20):

        # build the bag-of-words model
        vectorizer = CountVectorizer()
        text_vectors = vectorizer.fit_transform(dfDayly["News"])

        # split the dataset into train and test sets
        X_train, X_test, y_train, y_test = train_test_split(text_vectors, dfDayly["BO-BC"], test_size=0.1)

        # create the KNN model
        knn = KNeighborsClassifier(n_neighbors=i, metric='cosine')

        # train the model
        knn.fit(X_train, y_train)

        # evaluate the model
        y_pred = knn.predict(X_test)
        accuracy = accuracy_score(y_test, y_pred)

        if accuracy > accuracy_best_neighbor:
            accuracy_best_neighbor = accuracy
            best_neighbor = i
            print(f"New best neighbor: {best_neighbor}\n")
            print(classification_report(y_test, y_pred))
            print("\n--------------------------------------------------\n")

        #print(f"Accuracy: {accuracy}")


    return accuracy_best_neighbor

Gradient Boosting classification

The sixth sentence classification uses the Gradient Boosting classifier. It is a machine-learning algorithm that combines several decision-tree models sequentially, each model trying to correct the errors of the previous one. It is an ensemble-learning technique used for both classification and regression.
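The "each model corrects the previous one" idea can be seen in a minimal regression sketch (synthetic data, illustration only): every small tree is fit on the residuals left by the ensemble built so far.

# Sketch of the boosting idea: fit each shallow tree on the current residuals
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(200, 1))
y = np.sin(X.ravel())

pred = np.zeros_like(y)
for _ in range(50):
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, y - pred)           # target = residuals of the ensemble so far
    pred += 0.1 * tree.predict(X)   # shrink each tree's contribution (learning rate)

print(np.mean((y - pred) ** 2))     # training MSE shrinks as trees are added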

In [36]:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

def GradientBoosting():
    print("Gradient Boosting")
    # Load the dataset and split it into training and test sets
    X = dfDayly['News']
    y = dfDayly['BO-BC']
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=52)

    # Build the TF-IDF feature vectors
    vectorizer = TfidfVectorizer()
    X_train_vec = vectorizer.fit_transform(X_train)
    X_test_vec = vectorizer.transform(X_test)

    # Create the Gradient Boosting classification model
    gb = GradientBoostingClassifier()

    # Train the model
    gb.fit(X_train_vec, y_train)

    # Predict on the test set
    y_pred = gb.predict(X_test_vec)

    # Evaluate the model's performance
    accuracy = accuracy_score(y_test, y_pred)
    confusion = confusion_matrix(y_test, y_pred)

    #print(f"Accuracy: {accuracy}")
    print(f"Gradient Boosting - Confusion matrix:\n{confusion}")
    return accuracy

Neural-network classification with a CNN

The seventh sentence classification uses a CNN. A CNN is trained by backpropagating the error: the difference between the model's predictions and the correct labels is computed, and the filter weights are updated to minimize that error, usually with gradient-descent-style optimizers. CNNs are most commonly used for images. The function also plots the various training metrics.

In [37]:
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

def CNN():
    print("START CNN")
    # Read the data
    texts = dfDayly["News"].values
    labels = dfDayly["BO-BC"].values

    # Preprocess the data
    maxlen = 1000
    max_words = 100000
    tokenizer = Tokenizer(num_words=max_words)
    tokenizer.fit_on_texts(texts)
    sequences = tokenizer.texts_to_sequences(texts)
    word_index = tokenizer.word_index
    print("Found %s unique tokens." % len(word_index))
    data = pad_sequences(sequences, maxlen=maxlen)

    # Split into training and validation sets
    X_train, X_val, y_train, y_val = train_test_split(data, labels, test_size=0.2, random_state=42)

    # Build the CNN model
    model = Sequential()
    model.add(Embedding(max_words, 128, input_length=maxlen))
    model.add(Conv1D(32, 7, activation='relu'))
    model.add(GlobalMaxPooling1D())
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
    model.summary()

    # Train the model
    history = model.fit(X_train, y_train, epochs=300, batch_size=128, validation_data=(X_val, y_val))

    # Evaluate the model
    loss, accuracy = model.evaluate(X_val, y_val)

    # Print the results
    print("CNN: History results")
    print(history.history.keys())
    print(history)
    print(history.history['accuracy'])
    print(history.history['val_accuracy'])
    print(history.history['loss'])
    print(history.history['val_loss'])
    print("------------------")

    #print("Accuracy on validation set: %.2f" % (accuracy*100))
    print(f"Loss: {loss}")

    epochs = range(1, len(history.history['accuracy']) + 1)
    plt.plot(epochs, history.history['accuracy'], 'b', label='Training Accuracy')
    plt.plot(epochs, history.history['val_accuracy'], 'r', label='Validation Accuracy')
    plt.plot(epochs, history.history['loss'], 'g', label='Training Loss')
    plt.plot(epochs, history.history['val_loss'], 'm', label='Validation Loss')
    plt.title('CNN: Training and Validation Metrics')
    plt.xlabel('Epochs')
    plt.ylabel('Metrics')
    plt.legend()
    plt.show()
    return accuracy

Comparison of the classifiers and their accuracies

In [38]:
print(f"Accuracy of Naive Bayes : {NaiveBayes()}\n"
      f"Accuracy of Logistic: {Logistic()}\n"
      f"Accuracy of Random Forest: {RandomForest()}\n"
      f"Accuracy of KNN: {KNN()}\n"
      f"Accuracy of TF-IDF: {TFIDF()}\n"
      f"Accuracy of Gradient Boosting: {GradientBoosting()}\n"
      f"Accuracy of CNN: {CNN()}\n")
C:\Users\alexa\miniconda3\envs\ML\lib\site-packages\sklearn\linear_model\_logistic.py:458: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

Starting Random Forest
              precision    recall  f1-score   support

           0       0.53      0.55      0.54        31
           1       0.55      0.53      0.54        32

    accuracy                           0.54        63
   macro avg       0.54      0.54      0.54        63
weighted avg       0.54      0.54      0.54        63

------------------------------------------
KNN
New best neighbor: 1

              precision    recall  f1-score   support

           0       0.47      0.60      0.53        15
           1       0.54      0.41      0.47        17

    accuracy                           0.50        32
   macro avg       0.51      0.51      0.50        32
weighted avg       0.51      0.50      0.50        32


--------------------------------------------------

New best neighbor: 2

              precision    recall  f1-score   support

           0       0.59      0.76      0.67        17
           1       0.60      0.40      0.48        15

    accuracy                           0.59        32
   macro avg       0.60      0.58      0.57        32
weighted avg       0.60      0.59      0.58        32


--------------------------------------------------

New best neighbor: 11

              precision    recall  f1-score   support

           0       0.64      0.47      0.54        15
           1       0.62      0.76      0.68        17

    accuracy                           0.62        32
   macro avg       0.63      0.62      0.61        32
weighted avg       0.63      0.62      0.62        32


--------------------------------------------------

New best neighbor: 18

              precision    recall  f1-score   support

           0       0.71      0.67      0.69        18
           1       0.60      0.64      0.62        14

    accuracy                           0.66        32
   macro avg       0.65      0.65      0.65        32
weighted avg       0.66      0.66      0.66        32


--------------------------------------------------

TFIDF
Best parameters:  {'clf__max_depth': 6, 'clf__min_samples_leaf': 5, 'clf__min_samples_split': 0.25, 'clf__n_estimators': 19, 'tfidf__max_df': 0.5}
Best score:  0.634530612244898
Best estimator:  Pipeline(steps=[('tfidf', TfidfVectorizer(max_df=0.5)),
                ('clf',
                 RandomForestClassifier(max_depth=6, min_samples_leaf=5,
                                        min_samples_split=0.25,
                                        n_estimators=19))])
Best test score:  0.6190476190476191
Gradient Boosting
Gradient Boosting - Confusion matrix:
[[19 14]
 [13 17]]
START CNN
Found 11252 unique tokens.
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 embedding (Embedding)       (None, 1000, 128)         12800000  
                                                                 
 conv1d (Conv1D)             (None, 994, 32)           28704     
                                                                 
 global_max_pooling1d (Globa  (None, 32)               0         
 lMaxPooling1D)                                                  
                                                                 
 dense (Dense)               (None, 1)                 33        
                                                                 
=================================================================
Total params: 12,828,737
Trainable params: 12,828,737
Non-trainable params: 0
_________________________________________________________________
Epoch 1/300
2/2 [==============================] - 1s 347ms/step - loss: 0.6901 - accuracy: 0.5422 - val_loss: 0.7278 - val_accuracy: 0.3810
Epoch 2/300
2/2 [==============================] - 0s 124ms/step - loss: 0.6437 - accuracy: 0.5422 - val_loss: 0.7238 - val_accuracy: 0.3810
Epoch 3/300
2/2 [==============================] - 0s 117ms/step - loss: 0.6084 - accuracy: 0.6466 - val_loss: 0.7230 - val_accuracy: 0.3810
...
Epoch 16/300
2/2 [==============================] - 0s 108ms/step - loss: 0.3291 - accuracy: 1.0000 - val_loss: 0.7187 - val_accuracy: 0.4603
...
Epoch 115/300
2/2 [==============================] - 0s 93ms/step - loss: 3.7784e-05 - accuracy: 1.0000 - val_loss: 1.0196 - val_accuracy: 0.4921
Epoch 116/300
2/2 [==============================] - 0s 106ms/step - loss: 3.4619e-05 - accuracy: 1.0000 - val_loss: 1.0234 - val_accuracy: 0.4921
Epoch 117/300
2/2 [==============================] - 0s 100ms/step - loss: 3.1767e-05 - accuracy: 1.0000 - val_loss: 1.0280 - val_accuracy: 0.4921
Epoch 118/300
2/2 [==============================] - 0s 90ms/step - loss: 2.9410e-05 - accuracy: 1.0000 - val_loss: 1.0365 - val_accuracy: 0.4603
Epoch 119/300
2/2 [==============================] - 0s 110ms/step - loss: 2.6818e-05 - accuracy: 1.0000 - val_loss: 1.0373 - val_accuracy: 0.4921
Epoch 120/300
2/2 [==============================] - 0s 101ms/step - loss: 2.4676e-05 - accuracy: 1.0000 - val_loss: 1.0403 - val_accuracy: 0.4921
Epoch 121/300
2/2 [==============================] - 0s 105ms/step - loss: 2.2689e-05 - accuracy: 1.0000 - val_loss: 1.0454 - val_accuracy: 0.4921
Epoch 122/300
2/2 [==============================] - 0s 111ms/step - loss: 2.0894e-05 - accuracy: 1.0000 - val_loss: 1.0505 - val_accuracy: 0.4921
Epoch 123/300
2/2 [==============================] - 0s 113ms/step - loss: 1.9331e-05 - accuracy: 1.0000 - val_loss: 1.0546 - val_accuracy: 0.4921
Epoch 124/300
2/2 [==============================] - 0s 112ms/step - loss: 1.7779e-05 - accuracy: 1.0000 - val_loss: 1.0540 - val_accuracy: 0.4921
Epoch 125/300
2/2 [==============================] - 0s 100ms/step - loss: 1.6441e-05 - accuracy: 1.0000 - val_loss: 1.0599 - val_accuracy: 0.4921
Epoch 126/300
2/2 [==============================] - 0s 102ms/step - loss: 1.5261e-05 - accuracy: 1.0000 - val_loss: 1.0658 - val_accuracy: 0.4921
Epoch 127/300
2/2 [==============================] - 0s 98ms/step - loss: 1.4100e-05 - accuracy: 1.0000 - val_loss: 1.0689 - val_accuracy: 0.4921
Epoch 128/300
2/2 [==============================] - 0s 111ms/step - loss: 1.3024e-05 - accuracy: 1.0000 - val_loss: 1.0716 - val_accuracy: 0.4921
Epoch 129/300
2/2 [==============================] - 0s 97ms/step - loss: 1.2121e-05 - accuracy: 1.0000 - val_loss: 1.0789 - val_accuracy: 0.4921
Epoch 130/300
2/2 [==============================] - 0s 101ms/step - loss: 1.1203e-05 - accuracy: 1.0000 - val_loss: 1.0790 - val_accuracy: 0.4921
Epoch 131/300
2/2 [==============================] - 0s 97ms/step - loss: 1.0445e-05 - accuracy: 1.0000 - val_loss: 1.0862 - val_accuracy: 0.4921
Epoch 132/300
2/2 [==============================] - 0s 99ms/step - loss: 9.6806e-06 - accuracy: 1.0000 - val_loss: 1.0863 - val_accuracy: 0.4921
Epoch 133/300
2/2 [==============================] - 0s 103ms/step - loss: 9.0003e-06 - accuracy: 1.0000 - val_loss: 1.0893 - val_accuracy: 0.5079
Epoch 134/300
2/2 [==============================] - 0s 97ms/step - loss: 8.3975e-06 - accuracy: 1.0000 - val_loss: 1.0904 - val_accuracy: 0.5079
Epoch 135/300
2/2 [==============================] - 0s 96ms/step - loss: 7.8288e-06 - accuracy: 1.0000 - val_loss: 1.0960 - val_accuracy: 0.4921
Epoch 136/300
2/2 [==============================] - 0s 112ms/step - loss: 7.2930e-06 - accuracy: 1.0000 - val_loss: 1.1000 - val_accuracy: 0.4921
Epoch 137/300
2/2 [==============================] - 0s 120ms/step - loss: 6.8107e-06 - accuracy: 1.0000 - val_loss: 1.1039 - val_accuracy: 0.4921
Epoch 138/300
2/2 [==============================] - 0s 111ms/step - loss: 6.3673e-06 - accuracy: 1.0000 - val_loss: 1.1049 - val_accuracy: 0.5238
Epoch 139/300
2/2 [==============================] - 0s 106ms/step - loss: 5.9509e-06 - accuracy: 1.0000 - val_loss: 1.1106 - val_accuracy: 0.5079
Epoch 140/300
2/2 [==============================] - 0s 96ms/step - loss: 5.5765e-06 - accuracy: 1.0000 - val_loss: 1.1106 - val_accuracy: 0.5238
Epoch 141/300
2/2 [==============================] - 0s 96ms/step - loss: 5.2252e-06 - accuracy: 1.0000 - val_loss: 1.1178 - val_accuracy: 0.4921
Epoch 142/300
2/2 [==============================] - 0s 113ms/step - loss: 4.8940e-06 - accuracy: 1.0000 - val_loss: 1.1184 - val_accuracy: 0.5238
Epoch 143/300
2/2 [==============================] - 0s 112ms/step - loss: 4.5941e-06 - accuracy: 1.0000 - val_loss: 1.1214 - val_accuracy: 0.5238
Epoch 144/300
2/2 [==============================] - 0s 105ms/step - loss: 4.3081e-06 - accuracy: 1.0000 - val_loss: 1.1261 - val_accuracy: 0.5238
Epoch 145/300
2/2 [==============================] - 0s 107ms/step - loss: 4.0447e-06 - accuracy: 1.0000 - val_loss: 1.1287 - val_accuracy: 0.5238
Epoch 146/300
2/2 [==============================] - 0s 115ms/step - loss: 3.8081e-06 - accuracy: 1.0000 - val_loss: 1.1324 - val_accuracy: 0.5079
Epoch 147/300
2/2 [==============================] - 0s 103ms/step - loss: 3.5857e-06 - accuracy: 1.0000 - val_loss: 1.1339 - val_accuracy: 0.5238
Epoch 148/300
2/2 [==============================] - 0s 105ms/step - loss: 3.3713e-06 - accuracy: 1.0000 - val_loss: 1.1369 - val_accuracy: 0.5238
Epoch 149/300
2/2 [==============================] - 0s 103ms/step - loss: 3.1789e-06 - accuracy: 1.0000 - val_loss: 1.1389 - val_accuracy: 0.5238
Epoch 150/300
2/2 [==============================] - 0s 100ms/step - loss: 2.9878e-06 - accuracy: 1.0000 - val_loss: 1.1437 - val_accuracy: 0.5238
Epoch 151/300
2/2 [==============================] - 0s 110ms/step - loss: 2.8213e-06 - accuracy: 1.0000 - val_loss: 1.1471 - val_accuracy: 0.5238
Epoch 152/300
2/2 [==============================] - 0s 107ms/step - loss: 2.6596e-06 - accuracy: 1.0000 - val_loss: 1.1497 - val_accuracy: 0.5238
Epoch 153/300
2/2 [==============================] - 0s 112ms/step - loss: 2.5139e-06 - accuracy: 1.0000 - val_loss: 1.1530 - val_accuracy: 0.5238
Epoch 154/300
2/2 [==============================] - 0s 107ms/step - loss: 2.3737e-06 - accuracy: 1.0000 - val_loss: 1.1552 - val_accuracy: 0.5238
Epoch 155/300
2/2 [==============================] - 0s 109ms/step - loss: 2.2432e-06 - accuracy: 1.0000 - val_loss: 1.1576 - val_accuracy: 0.5238
Epoch 156/300
2/2 [==============================] - 0s 103ms/step - loss: 2.1233e-06 - accuracy: 1.0000 - val_loss: 1.1600 - val_accuracy: 0.5238
Epoch 157/300
2/2 [==============================] - 0s 104ms/step - loss: 2.0128e-06 - accuracy: 1.0000 - val_loss: 1.1622 - val_accuracy: 0.5238
Epoch 158/300
2/2 [==============================] - 0s 111ms/step - loss: 1.9005e-06 - accuracy: 1.0000 - val_loss: 1.1652 - val_accuracy: 0.5238
Epoch 159/300
2/2 [==============================] - 0s 97ms/step - loss: 1.8000e-06 - accuracy: 1.0000 - val_loss: 1.1693 - val_accuracy: 0.5238
Epoch 160/300
2/2 [==============================] - 0s 96ms/step - loss: 1.7059e-06 - accuracy: 1.0000 - val_loss: 1.1723 - val_accuracy: 0.5238
Epoch 161/300
2/2 [==============================] - 0s 111ms/step - loss: 1.6158e-06 - accuracy: 1.0000 - val_loss: 1.1753 - val_accuracy: 0.5238
Epoch 162/300
2/2 [==============================] - 0s 98ms/step - loss: 1.5350e-06 - accuracy: 1.0000 - val_loss: 1.1785 - val_accuracy: 0.5238
Epoch 163/300
2/2 [==============================] - 0s 110ms/step - loss: 1.4543e-06 - accuracy: 1.0000 - val_loss: 1.1811 - val_accuracy: 0.5238
Epoch 164/300
2/2 [==============================] - 0s 95ms/step - loss: 1.3814e-06 - accuracy: 1.0000 - val_loss: 1.1821 - val_accuracy: 0.5238
Epoch 165/300
2/2 [==============================] - 0s 101ms/step - loss: 1.3108e-06 - accuracy: 1.0000 - val_loss: 1.1852 - val_accuracy: 0.5238
Epoch 166/300
2/2 [==============================] - 0s 107ms/step - loss: 1.2444e-06 - accuracy: 1.0000 - val_loss: 1.1882 - val_accuracy: 0.5238
Epoch 167/300
2/2 [==============================] - 0s 111ms/step - loss: 1.1827e-06 - accuracy: 1.0000 - val_loss: 1.1908 - val_accuracy: 0.5238
Epoch 168/300
2/2 [==============================] - 0s 102ms/step - loss: 1.1261e-06 - accuracy: 1.0000 - val_loss: 1.1942 - val_accuracy: 0.5238
Epoch 169/300
2/2 [==============================] - 0s 134ms/step - loss: 1.0695e-06 - accuracy: 1.0000 - val_loss: 1.1959 - val_accuracy: 0.5238
Epoch 170/300
2/2 [==============================] - 0s 101ms/step - loss: 1.0178e-06 - accuracy: 1.0000 - val_loss: 1.1991 - val_accuracy: 0.5238
Epoch 171/300
2/2 [==============================] - 0s 98ms/step - loss: 9.6823e-07 - accuracy: 1.0000 - val_loss: 1.2011 - val_accuracy: 0.5238
Epoch 172/300
2/2 [==============================] - 0s 111ms/step - loss: 9.2240e-07 - accuracy: 1.0000 - val_loss: 1.2041 - val_accuracy: 0.5238
Epoch 173/300
2/2 [==============================] - 0s 97ms/step - loss: 8.8017e-07 - accuracy: 1.0000 - val_loss: 1.2053 - val_accuracy: 0.5238
Epoch 174/300
2/2 [==============================] - 0s 112ms/step - loss: 8.3817e-07 - accuracy: 1.0000 - val_loss: 1.2076 - val_accuracy: 0.5238
Epoch 175/300
2/2 [==============================] - 0s 105ms/step - loss: 7.9736e-07 - accuracy: 1.0000 - val_loss: 1.2104 - val_accuracy: 0.5238
Epoch 176/300
2/2 [==============================] - 0s 112ms/step - loss: 7.6043e-07 - accuracy: 1.0000 - val_loss: 1.2126 - val_accuracy: 0.5238
Epoch 177/300
2/2 [==============================] - 0s 125ms/step - loss: 7.2655e-07 - accuracy: 1.0000 - val_loss: 1.2161 - val_accuracy: 0.5238
Epoch 178/300
2/2 [==============================] - 0s 96ms/step - loss: 6.9236e-07 - accuracy: 1.0000 - val_loss: 1.2179 - val_accuracy: 0.5238
Epoch 179/300
2/2 [==============================] - 0s 115ms/step - loss: 6.6064e-07 - accuracy: 1.0000 - val_loss: 1.2202 - val_accuracy: 0.5238
Epoch 180/300
2/2 [==============================] - 0s 114ms/step - loss: 6.3075e-07 - accuracy: 1.0000 - val_loss: 1.2229 - val_accuracy: 0.5238
Epoch 181/300
2/2 [==============================] - 0s 101ms/step - loss: 6.0257e-07 - accuracy: 1.0000 - val_loss: 1.2248 - val_accuracy: 0.5238
Epoch 182/300
2/2 [==============================] - 0s 110ms/step - loss: 5.7553e-07 - accuracy: 1.0000 - val_loss: 1.2271 - val_accuracy: 0.5238
Epoch 183/300
2/2 [==============================] - 0s 109ms/step - loss: 5.5002e-07 - accuracy: 1.0000 - val_loss: 1.2294 - val_accuracy: 0.5238
Epoch 184/300
2/2 [==============================] - 0s 109ms/step - loss: 5.2565e-07 - accuracy: 1.0000 - val_loss: 1.2320 - val_accuracy: 0.5238
Epoch 185/300
2/2 [==============================] - 0s 105ms/step - loss: 5.0281e-07 - accuracy: 1.0000 - val_loss: 1.2342 - val_accuracy: 0.5238
Epoch 186/300
2/2 [==============================] - 0s 118ms/step - loss: 4.8125e-07 - accuracy: 1.0000 - val_loss: 1.2358 - val_accuracy: 0.5238
Epoch 187/300
2/2 [==============================] - 0s 105ms/step - loss: 4.6015e-07 - accuracy: 1.0000 - val_loss: 1.2384 - val_accuracy: 0.5238
Epoch 188/300
2/2 [==============================] - 0s 108ms/step - loss: 4.4033e-07 - accuracy: 1.0000 - val_loss: 1.2407 - val_accuracy: 0.5238
Epoch 189/300
2/2 [==============================] - 0s 109ms/step - loss: 4.2143e-07 - accuracy: 1.0000 - val_loss: 1.2433 - val_accuracy: 0.5238
Epoch 190/300
2/2 [==============================] - 0s 101ms/step - loss: 4.0375e-07 - accuracy: 1.0000 - val_loss: 1.2455 - val_accuracy: 0.5238
Epoch 191/300
2/2 [==============================] - 0s 95ms/step - loss: 3.8757e-07 - accuracy: 1.0000 - val_loss: 1.2484 - val_accuracy: 0.5238
Epoch 192/300
2/2 [==============================] - 0s 98ms/step - loss: 3.7103e-07 - accuracy: 1.0000 - val_loss: 1.2500 - val_accuracy: 0.5238
Epoch 193/300
2/2 [==============================] - 0s 104ms/step - loss: 3.5562e-07 - accuracy: 1.0000 - val_loss: 1.2526 - val_accuracy: 0.5238
Epoch 194/300
2/2 [==============================] - 0s 107ms/step - loss: 3.4187e-07 - accuracy: 1.0000 - val_loss: 1.2541 - val_accuracy: 0.5238
Epoch 195/300
2/2 [==============================] - 0s 112ms/step - loss: 3.2747e-07 - accuracy: 1.0000 - val_loss: 1.2563 - val_accuracy: 0.5238
Epoch 196/300
2/2 [==============================] - 0s 106ms/step - loss: 3.1424e-07 - accuracy: 1.0000 - val_loss: 1.2585 - val_accuracy: 0.5238
Epoch 197/300
2/2 [==============================] - 0s 100ms/step - loss: 3.0166e-07 - accuracy: 1.0000 - val_loss: 1.2605 - val_accuracy: 0.5238
Epoch 198/300
2/2 [==============================] - 0s 112ms/step - loss: 2.8977e-07 - accuracy: 1.0000 - val_loss: 1.2626 - val_accuracy: 0.5238
Epoch 199/300
2/2 [==============================] - 0s 108ms/step - loss: 2.7836e-07 - accuracy: 1.0000 - val_loss: 1.2648 - val_accuracy: 0.5238
Epoch 200/300
2/2 [==============================] - 0s 96ms/step - loss: 2.6771e-07 - accuracy: 1.0000 - val_loss: 1.2669 - val_accuracy: 0.5238
Epoch 201/300
2/2 [==============================] - 0s 109ms/step - loss: 2.5739e-07 - accuracy: 1.0000 - val_loss: 1.2690 - val_accuracy: 0.5238
Epoch 202/300
2/2 [==============================] - 0s 97ms/step - loss: 2.4754e-07 - accuracy: 1.0000 - val_loss: 1.2713 - val_accuracy: 0.5238
Epoch 203/300
2/2 [==============================] - 0s 106ms/step - loss: 2.3821e-07 - accuracy: 1.0000 - val_loss: 1.2732 - val_accuracy: 0.5238
Epoch 204/300
2/2 [==============================] - 0s 104ms/step - loss: 2.2946e-07 - accuracy: 1.0000 - val_loss: 1.2755 - val_accuracy: 0.5238
Epoch 205/300
2/2 [==============================] - 0s 95ms/step - loss: 2.2083e-07 - accuracy: 1.0000 - val_loss: 1.2775 - val_accuracy: 0.5238
Epoch 206/300
2/2 [==============================] - 0s 113ms/step - loss: 2.1284e-07 - accuracy: 1.0000 - val_loss: 1.2791 - val_accuracy: 0.5238
Epoch 207/300
2/2 [==============================] - 0s 111ms/step - loss: 2.0492e-07 - accuracy: 1.0000 - val_loss: 1.2812 - val_accuracy: 0.5238
Epoch 208/300
2/2 [==============================] - 0s 96ms/step - loss: 1.9746e-07 - accuracy: 1.0000 - val_loss: 1.2830 - val_accuracy: 0.5238
Epoch 209/300
2/2 [==============================] - 0s 102ms/step - loss: 1.9036e-07 - accuracy: 1.0000 - val_loss: 1.2851 - val_accuracy: 0.5238
Epoch 210/300
2/2 [==============================] - 0s 112ms/step - loss: 1.8366e-07 - accuracy: 1.0000 - val_loss: 1.2868 - val_accuracy: 0.5238
Epoch 211/300
2/2 [==============================] - 0s 97ms/step - loss: 1.7720e-07 - accuracy: 1.0000 - val_loss: 1.2886 - val_accuracy: 0.5238
Epoch 212/300
2/2 [==============================] - 0s 103ms/step - loss: 1.7099e-07 - accuracy: 1.0000 - val_loss: 1.2908 - val_accuracy: 0.5238
Epoch 213/300
2/2 [==============================] - 0s 95ms/step - loss: 1.6521e-07 - accuracy: 1.0000 - val_loss: 1.2926 - val_accuracy: 0.5238
Epoch 214/300
2/2 [==============================] - 0s 95ms/step - loss: 1.5969e-07 - accuracy: 1.0000 - val_loss: 1.2944 - val_accuracy: 0.5238
Epoch 215/300
2/2 [==============================] - 0s 113ms/step - loss: 1.5424e-07 - accuracy: 1.0000 - val_loss: 1.2963 - val_accuracy: 0.5238
Epoch 216/300
2/2 [==============================] - 0s 105ms/step - loss: 1.4913e-07 - accuracy: 1.0000 - val_loss: 1.2981 - val_accuracy: 0.5238
Epoch 217/300
2/2 [==============================] - 0s 104ms/step - loss: 1.4433e-07 - accuracy: 1.0000 - val_loss: 1.2996 - val_accuracy: 0.5238
Epoch 218/300
2/2 [==============================] - 0s 95ms/step - loss: 1.3953e-07 - accuracy: 1.0000 - val_loss: 1.3014 - val_accuracy: 0.5238
Epoch 219/300
2/2 [==============================] - 0s 112ms/step - loss: 1.3506e-07 - accuracy: 1.0000 - val_loss: 1.3031 - val_accuracy: 0.5238
Epoch 220/300
2/2 [==============================] - 0s 105ms/step - loss: 1.3076e-07 - accuracy: 1.0000 - val_loss: 1.3050 - val_accuracy: 0.5238
Epoch 221/300
2/2 [==============================] - 0s 105ms/step - loss: 1.2660e-07 - accuracy: 1.0000 - val_loss: 1.3067 - val_accuracy: 0.5238
Epoch 222/300
2/2 [==============================] - 0s 96ms/step - loss: 1.2264e-07 - accuracy: 1.0000 - val_loss: 1.3084 - val_accuracy: 0.5238
Epoch 223/300
2/2 [==============================] - 0s 97ms/step - loss: 1.1884e-07 - accuracy: 1.0000 - val_loss: 1.3101 - val_accuracy: 0.5238
Epoch 224/300
2/2 [==============================] - 0s 97ms/step - loss: 1.1527e-07 - accuracy: 1.0000 - val_loss: 1.3118 - val_accuracy: 0.5238
Epoch 225/300
2/2 [==============================] - 0s 116ms/step - loss: 1.1175e-07 - accuracy: 1.0000 - val_loss: 1.3134 - val_accuracy: 0.5238
Epoch 226/300
2/2 [==============================] - 0s 98ms/step - loss: 1.0841e-07 - accuracy: 1.0000 - val_loss: 1.3148 - val_accuracy: 0.5238
Epoch 227/300
2/2 [==============================] - 0s 116ms/step - loss: 1.0515e-07 - accuracy: 1.0000 - val_loss: 1.3160 - val_accuracy: 0.5079
Epoch 228/300
2/2 [==============================] - 0s 112ms/step - loss: 1.0204e-07 - accuracy: 1.0000 - val_loss: 1.3173 - val_accuracy: 0.5079
Epoch 229/300
2/2 [==============================] - 0s 110ms/step - loss: 9.9040e-08 - accuracy: 1.0000 - val_loss: 1.3189 - val_accuracy: 0.5079
Epoch 230/300
2/2 [==============================] - 0s 165ms/step - loss: 9.6221e-08 - accuracy: 1.0000 - val_loss: 1.3203 - val_accuracy: 0.5079
Epoch 231/300
2/2 [==============================] - 0s 147ms/step - loss: 9.3481e-08 - accuracy: 1.0000 - val_loss: 1.3215 - val_accuracy: 0.5079
Epoch 232/300
2/2 [==============================] - 0s 113ms/step - loss: 9.0815e-08 - accuracy: 1.0000 - val_loss: 1.3232 - val_accuracy: 0.5079
Epoch 233/300
2/2 [==============================] - 0s 98ms/step - loss: 8.8329e-08 - accuracy: 1.0000 - val_loss: 1.3247 - val_accuracy: 0.5079
Epoch 234/300
2/2 [==============================] - 0s 101ms/step - loss: 8.5932e-08 - accuracy: 1.0000 - val_loss: 1.3261 - val_accuracy: 0.5079
Epoch 235/300
2/2 [==============================] - 0s 100ms/step - loss: 8.3620e-08 - accuracy: 1.0000 - val_loss: 1.3276 - val_accuracy: 0.5079
Epoch 236/300
2/2 [==============================] - 0s 97ms/step - loss: 8.1393e-08 - accuracy: 1.0000 - val_loss: 1.3291 - val_accuracy: 0.5079
Epoch 237/300
2/2 [==============================] - 0s 96ms/step - loss: 7.9247e-08 - accuracy: 1.0000 - val_loss: 1.3307 - val_accuracy: 0.5079
Epoch 238/300
2/2 [==============================] - 0s 98ms/step - loss: 7.7210e-08 - accuracy: 1.0000 - val_loss: 1.3319 - val_accuracy: 0.5079
Epoch 239/300
2/2 [==============================] - 0s 98ms/step - loss: 7.5221e-08 - accuracy: 1.0000 - val_loss: 1.3332 - val_accuracy: 0.5079
Epoch 240/300
2/2 [==============================] - 0s 105ms/step - loss: 7.3306e-08 - accuracy: 1.0000 - val_loss: 1.3347 - val_accuracy: 0.5079
Epoch 241/300
2/2 [==============================] - 0s 97ms/step - loss: 7.1480e-08 - accuracy: 1.0000 - val_loss: 1.3363 - val_accuracy: 0.5079
Epoch 242/300
2/2 [==============================] - 0s 111ms/step - loss: 6.9743e-08 - accuracy: 1.0000 - val_loss: 1.3376 - val_accuracy: 0.5079
Epoch 243/300
2/2 [==============================] - 0s 102ms/step - loss: 6.8036e-08 - accuracy: 1.0000 - val_loss: 1.3389 - val_accuracy: 0.5079
Epoch 244/300
2/2 [==============================] - 0s 102ms/step - loss: 6.6381e-08 - accuracy: 1.0000 - val_loss: 1.3405 - val_accuracy: 0.5079
Epoch 245/300
2/2 [==============================] - 0s 95ms/step - loss: 6.4841e-08 - accuracy: 1.0000 - val_loss: 1.3416 - val_accuracy: 0.5079
Epoch 246/300
2/2 [==============================] - 0s 113ms/step - loss: 6.3305e-08 - accuracy: 1.0000 - val_loss: 1.3434 - val_accuracy: 0.5079
Epoch 247/300
2/2 [==============================] - 0s 98ms/step - loss: 6.1897e-08 - accuracy: 1.0000 - val_loss: 1.3449 - val_accuracy: 0.5238
Epoch 248/300
2/2 [==============================] - 0s 101ms/step - loss: 6.0514e-08 - accuracy: 1.0000 - val_loss: 1.3465 - val_accuracy: 0.5238
Epoch 249/300
2/2 [==============================] - 0s 113ms/step - loss: 5.9202e-08 - accuracy: 1.0000 - val_loss: 1.3481 - val_accuracy: 0.5238
Epoch 250/300
2/2 [==============================] - 0s 115ms/step - loss: 5.7925e-08 - accuracy: 1.0000 - val_loss: 1.3496 - val_accuracy: 0.5238
Epoch 251/300
2/2 [==============================] - 0s 107ms/step - loss: 5.6728e-08 - accuracy: 1.0000 - val_loss: 1.3513 - val_accuracy: 0.5238
Epoch 252/300
2/2 [==============================] - 0s 96ms/step - loss: 5.5587e-08 - accuracy: 1.0000 - val_loss: 1.3530 - val_accuracy: 0.5238
Epoch 253/300
2/2 [==============================] - 0s 94ms/step - loss: 5.4455e-08 - accuracy: 1.0000 - val_loss: 1.3547 - val_accuracy: 0.5238
Epoch 254/300
2/2 [==============================] - 0s 97ms/step - loss: 5.3408e-08 - accuracy: 1.0000 - val_loss: 1.3566 - val_accuracy: 0.5238
Epoch 255/300
2/2 [==============================] - 0s 97ms/step - loss: 5.2389e-08 - accuracy: 1.0000 - val_loss: 1.3580 - val_accuracy: 0.5238
Epoch 256/300
2/2 [==============================] - 0s 100ms/step - loss: 5.1344e-08 - accuracy: 1.0000 - val_loss: 1.3591 - val_accuracy: 0.5238
Epoch 257/300
2/2 [==============================] - 0s 119ms/step - loss: 5.0338e-08 - accuracy: 1.0000 - val_loss: 1.3609 - val_accuracy: 0.5238
Epoch 258/300
2/2 [==============================] - 0s 94ms/step - loss: 4.9463e-08 - accuracy: 1.0000 - val_loss: 1.3625 - val_accuracy: 0.5238
Epoch 259/300
2/2 [==============================] - 0s 105ms/step - loss: 4.8530e-08 - accuracy: 1.0000 - val_loss: 1.3643 - val_accuracy: 0.5238
Epoch 260/300
2/2 [==============================] - 0s 106ms/step - loss: 4.7669e-08 - accuracy: 1.0000 - val_loss: 1.3655 - val_accuracy: 0.5238
Epoch 261/300
2/2 [==============================] - 0s 97ms/step - loss: 4.6811e-08 - accuracy: 1.0000 - val_loss: 1.3672 - val_accuracy: 0.5238
Epoch 262/300
2/2 [==============================] - 0s 111ms/step - loss: 4.6019e-08 - accuracy: 1.0000 - val_loss: 1.3684 - val_accuracy: 0.5238
Epoch 263/300
2/2 [==============================] - 0s 104ms/step - loss: 4.5192e-08 - accuracy: 1.0000 - val_loss: 1.3696 - val_accuracy: 0.5238
Epoch 264/300
2/2 [==============================] - 0s 100ms/step - loss: 4.4387e-08 - accuracy: 1.0000 - val_loss: 1.3708 - val_accuracy: 0.5238
Epoch 265/300
2/2 [==============================] - 0s 112ms/step - loss: 4.3638e-08 - accuracy: 1.0000 - val_loss: 1.3721 - val_accuracy: 0.5238
Epoch 266/300
2/2 [==============================] - 0s 107ms/step - loss: 4.2891e-08 - accuracy: 1.0000 - val_loss: 1.3734 - val_accuracy: 0.5238
Epoch 267/300
2/2 [==============================] - 0s 106ms/step - loss: 4.2157e-08 - accuracy: 1.0000 - val_loss: 1.3746 - val_accuracy: 0.5238
Epoch 268/300
2/2 [==============================] - 0s 120ms/step - loss: 4.1478e-08 - accuracy: 1.0000 - val_loss: 1.3763 - val_accuracy: 0.5238
Epoch 269/300
2/2 [==============================] - 0s 112ms/step - loss: 4.0841e-08 - accuracy: 1.0000 - val_loss: 1.3777 - val_accuracy: 0.5238
Epoch 270/300
2/2 [==============================] - 0s 109ms/step - loss: 4.0210e-08 - accuracy: 1.0000 - val_loss: 1.3791 - val_accuracy: 0.5238
Epoch 271/300
2/2 [==============================] - 0s 102ms/step - loss: 3.9600e-08 - accuracy: 1.0000 - val_loss: 1.3805 - val_accuracy: 0.5238
Epoch 272/300
2/2 [==============================] - 0s 121ms/step - loss: 3.9020e-08 - accuracy: 1.0000 - val_loss: 1.3816 - val_accuracy: 0.5238
Epoch 273/300
2/2 [==============================] - 0s 118ms/step - loss: 3.8417e-08 - accuracy: 1.0000 - val_loss: 1.3830 - val_accuracy: 0.5238
Epoch 274/300
2/2 [==============================] - 0s 118ms/step - loss: 3.7882e-08 - accuracy: 1.0000 - val_loss: 1.3839 - val_accuracy: 0.5238
Epoch 275/300
2/2 [==============================] - 0s 107ms/step - loss: 3.7272e-08 - accuracy: 1.0000 - val_loss: 1.3852 - val_accuracy: 0.5238
Epoch 276/300
2/2 [==============================] - 0s 110ms/step - loss: 3.6750e-08 - accuracy: 1.0000 - val_loss: 1.3867 - val_accuracy: 0.5238
Epoch 277/300
2/2 [==============================] - 0s 104ms/step - loss: 3.6246e-08 - accuracy: 1.0000 - val_loss: 1.3879 - val_accuracy: 0.5238
Epoch 278/300
2/2 [==============================] - 0s 97ms/step - loss: 3.5726e-08 - accuracy: 1.0000 - val_loss: 1.3889 - val_accuracy: 0.5238
Epoch 279/300
2/2 [==============================] - 0s 115ms/step - loss: 3.5207e-08 - accuracy: 1.0000 - val_loss: 1.3901 - val_accuracy: 0.5238
Epoch 280/300
2/2 [==============================] - 0s 105ms/step - loss: 3.4744e-08 - accuracy: 1.0000 - val_loss: 1.3915 - val_accuracy: 0.5238
Epoch 281/300
2/2 [==============================] - 0s 111ms/step - loss: 3.4299e-08 - accuracy: 1.0000 - val_loss: 1.3925 - val_accuracy: 0.5238
Epoch 282/300
2/2 [==============================] - 0s 107ms/step - loss: 3.3826e-08 - accuracy: 1.0000 - val_loss: 1.3936 - val_accuracy: 0.5238
Epoch 283/300
2/2 [==============================] - 0s 98ms/step - loss: 3.3381e-08 - accuracy: 1.0000 - val_loss: 1.3949 - val_accuracy: 0.5238
Epoch 284/300
2/2 [==============================] - 0s 113ms/step - loss: 3.2957e-08 - accuracy: 1.0000 - val_loss: 1.3962 - val_accuracy: 0.5238
Epoch 285/300
2/2 [==============================] - 0s 104ms/step - loss: 3.2561e-08 - accuracy: 1.0000 - val_loss: 1.3971 - val_accuracy: 0.5238
Epoch 286/300
2/2 [==============================] - 0s 117ms/step - loss: 3.2128e-08 - accuracy: 1.0000 - val_loss: 1.3981 - val_accuracy: 0.5238
Epoch 287/300
2/2 [==============================] - 0s 106ms/step - loss: 3.1729e-08 - accuracy: 1.0000 - val_loss: 1.3994 - val_accuracy: 0.5238
Epoch 288/300
2/2 [==============================] - 0s 170ms/step - loss: 3.1351e-08 - accuracy: 1.0000 - val_loss: 1.4007 - val_accuracy: 0.5238
Epoch 289/300
2/2 [==============================] - 0s 159ms/step - loss: 3.0982e-08 - accuracy: 1.0000 - val_loss: 1.4017 - val_accuracy: 0.5238
Epoch 290/300
2/2 [==============================] - 0s 101ms/step - loss: 3.0620e-08 - accuracy: 1.0000 - val_loss: 1.4028 - val_accuracy: 0.5238
Epoch 291/300
2/2 [==============================] - 0s 111ms/step - loss: 3.0252e-08 - accuracy: 1.0000 - val_loss: 1.4035 - val_accuracy: 0.5238
Epoch 292/300
2/2 [==============================] - 0s 113ms/step - loss: 2.9883e-08 - accuracy: 1.0000 - val_loss: 1.4049 - val_accuracy: 0.4921
Epoch 293/300
2/2 [==============================] - 0s 94ms/step - loss: 2.9558e-08 - accuracy: 1.0000 - val_loss: 1.4059 - val_accuracy: 0.4921
Epoch 294/300
2/2 [==============================] - 0s 104ms/step - loss: 2.9221e-08 - accuracy: 1.0000 - val_loss: 1.4069 - val_accuracy: 0.4921
Epoch 295/300
2/2 [==============================] - 0s 102ms/step - loss: 2.8905e-08 - accuracy: 1.0000 - val_loss: 1.4079 - val_accuracy: 0.4921
Epoch 296/300
2/2 [==============================] - 0s 100ms/step - loss: 2.8566e-08 - accuracy: 1.0000 - val_loss: 1.4090 - val_accuracy: 0.4921
Epoch 297/300
2/2 [==============================] - 0s 112ms/step - loss: 2.8271e-08 - accuracy: 1.0000 - val_loss: 1.4102 - val_accuracy: 0.4921
Epoch 298/300
2/2 [==============================] - 0s 98ms/step - loss: 2.7975e-08 - accuracy: 1.0000 - val_loss: 1.4109 - val_accuracy: 0.4921
Epoch 299/300
2/2 [==============================] - 0s 101ms/step - loss: 2.7662e-08 - accuracy: 1.0000 - val_loss: 1.4121 - val_accuracy: 0.4921
Epoch 300/300
2/2 [==============================] - 0s 105ms/step - loss: 2.7397e-08 - accuracy: 1.0000 - val_loss: 1.4133 - val_accuracy: 0.4921
2/2 [==============================] - 0s 13ms/step - loss: 1.4133 - accuracy: 0.4921
CNN: History results
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
<keras.callbacks.History object at 0x00000293392CE670>
accuracy:     [0.5422, 0.5422, 0.6466, 0.8594, ..., 1.0, 1.0, 1.0]  (reaches 1.0 at epoch 16 and never drops)
val_accuracy: [0.3810, 0.3810, 0.3810, ..., 0.4921, 0.4921, 0.4921]  (peaks at 0.5238, ends at 0.4921)
loss:         [0.6901, 0.6437, 0.6084, ..., 2.7662e-08, 2.7397e-08]  (decays monotonically toward zero)
val_loss:     [0.7278, 0.7238, 0.7230, ..., 1.4121, 1.4133]  (bottoms out near 0.716 around epoch 19, then rises for the rest of training)
------------------
Loss: 1.4133474826812744
Accuracy of Naive Bayes : 0.6031746031746031
Accuracy of Logistic: 0.49206349206349204
Accuracy of Random Forest: 0.5396825396825397
Accuracy of KNN: 0.65625
Accuracy of TF-IDF: 0.6190476190476191
Accuracy of Gradient Boosting: 0.5714285714285714
Accuracy of CNN: 0.4920634925365448
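
The log above is a textbook case of overfitting: training accuracy reaches 1.0000 by epoch 16, while validation accuracy never exceeds 0.5238 and validation loss bottoms out near 0.72 within the first twenty epochs, then rises for the remaining ~280. Plotting the history makes this immediately visible; below is a minimal sketch that assumes the History object returned by model.fit was stored in a variable named history (an illustrative name, not shown in this excerpt).

import matplotlib.pyplot as plt

# Sketch: loss and accuracy curves from a Keras History object.
# `history` is assumed to hold the return value of model.fit.
fig, (ax_loss, ax_acc) = plt.subplots(1, 2, figsize=(12, 4))

ax_loss.plot(history.history['loss'], label='loss')
ax_loss.plot(history.history['val_loss'], label='val_loss')
ax_loss.set_xlabel('epoch')
ax_loss.legend()

ax_acc.plot(history.history['accuracy'], label='accuracy')
ax_acc.plot(history.history['val_accuracy'], label='val_accuracy')
ax_acc.set_xlabel('epoch')
ax_acc.legend()

plt.show()

Passing a keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True) callback to model.fit would have stopped training near the validation-loss minimum instead of running all 300 epochs.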

Comparison of the classifiers and their accuracies

The accuracies above let us assess which method is the most effective given only 312 rows in the dataset. Despite the many methods tried, and the little data available for training, the classifier with the highest accuracy is KNN (0.65625), as plotted in the sketch below.
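
For a quick visual comparison, the reported accuracies can be plotted side by side. A minimal sketch follows, with the values copied from the output above and rounded to four decimals.

import matplotlib.pyplot as plt

# Sketch: bar chart of the classifier accuracies reported above
accuracies = {
    'Naive Bayes': 0.6032,
    'Logistic': 0.4921,
    'Random Forest': 0.5397,
    'KNN': 0.6563,
    'TF-IDF': 0.6190,
    'Gradient Boosting': 0.5714,
    'CNN': 0.4921,
}

plt.figure(figsize=(10, 4))
plt.bar(list(accuracies), list(accuracies.values()), color='steelblue')
plt.axhline(0.5, color='red', linestyle='--', label='chance level (binary)')
plt.ylabel('Accuracy')
plt.title('Classifier accuracy on the news dataset (312 rows)')
plt.legend()
plt.show()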

Graphical visualization of the words

In [39]:
import nltk
from nltk.tokenize import word_tokenize
from nltk.probability import FreqDist
from wordcloud import WordCloud

# nltk.download('punkt')  # required once: word_tokenize needs the punkt model

# Concatenate all news items into a single string and count token frequencies
all_news = ' '.join([text for text in dfDayly['News']])
tokens = word_tokenize(all_news)
freq_dist = FreqDist(tokens)

# Word cloud weighted by token frequency
wordcloud = WordCloud(width=800, height=400, background_color='white').generate_from_frequencies(freq_dist)

plt.figure(figsize=(10, 5))
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()
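
Raw token frequencies are dominated by stopwords and punctuation, so the cloud above mostly shows function words. A minimal sketch of filtering them out with NLTK's English stopword list (downloaded once) before rebuilding the cloud:

from nltk.corpus import stopwords

# nltk.download('stopwords')  # uncomment on first run
stop_words = set(stopwords.words('english'))

# Keep only alphabetic tokens that are not stopwords
clean_tokens = [t for t in tokens if t.isalpha() and t.lower() not in stop_words]
freq_dist_clean = FreqDist(clean_tokens)

wordcloud_clean = WordCloud(width=800, height=400,
                            background_color='white').generate_from_frequencies(freq_dist_clean)
plt.figure(figsize=(10, 5))
plt.imshow(wordcloud_clean, interpolation='bilinear')
plt.axis('off')
plt.show()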
In [40]:
# Same word cloud, restricted to rows labelled 1 in 'BO-BC'
all_news_val1 = ' '.join([text for text in dfDayly[dfDayly['BO-BC'] == 1]['News']])
tokens_val1 = word_tokenize(all_news_val1)
freq_dist_val1 = FreqDist(tokens_val1)

wordcloud = WordCloud(width=800, height=400, background_color='white').generate_from_frequencies(freq_dist_val1)

plt.figure(figsize=(10, 5))
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()
In [41]:
# Same word cloud, restricted to rows labelled 0 in 'BO-BC'
all_news_val0 = ' '.join([text for text in dfDayly[dfDayly['BO-BC'] == 0]['News']])
tokens_val0 = word_tokenize(all_news_val0)
freq_dist_val0 = FreqDist(tokens_val0)

wordcloud = WordCloud(width=800, height=400, background_color='white').generate_from_frequencies(freq_dist_val0)

plt.figure(figsize=(10, 5))
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()
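
The three cells above repeat the same tokenize/count/plot steps. A small helper, sketched here with the names already defined in this notebook, would remove the duplication:

def plot_wordcloud(texts, title=None):
    """Tokenize the given news texts and display a frequency-weighted word cloud."""
    tokens = word_tokenize(' '.join(texts))
    freq = FreqDist(tokens)
    cloud = WordCloud(width=800, height=400,
                      background_color='white').generate_from_frequencies(freq)
    plt.figure(figsize=(10, 5))
    plt.imshow(cloud, interpolation='bilinear')
    plt.axis('off')
    if title:
        plt.title(title)
    plt.show()

# Equivalent to the three cells above:
plot_wordcloud(dfDayly['News'], 'All news')
plot_wordcloud(dfDayly[dfDayly['BO-BC'] == 1]['News'], 'News on rows labelled 1')
plot_wordcloud(dfDayly[dfDayly['BO-BC'] == 0]['News'], 'News on rows labelled 0')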